
    Discretization and Bayesian modeling in inverse problems and imaging

    In this thesis, Bayesian modeling and discretization are studied in inverse problems related to imaging. The treatise consists of four articles, which focus on the phenomena that appear when more detailed data or a priori information become available. Novel Bayesian methods for solving ill-posed signal processing problems in an edge-preserving manner are introduced and analysed. Furthermore, the modeling of photographs in image processing problems is studied and a novel model is presented.
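    The abstract does not specify the thesis's priors, so as a hedged illustration of what edge-preserving Bayesian inversion typically involves, the following Python sketch computes a MAP estimate for a toy 1D deconvolution problem under a smoothed total-variation prior; the forward operator, noise level, and prior scale are all illustrative assumptions rather than the thesis's actual models.

        import numpy as np
        from scipy.optimize import minimize

        # Toy setup (assumed): blur a piecewise-constant signal and add noise.
        rng = np.random.default_rng(0)
        n = 128
        x_true = np.zeros(n)
        x_true[40:80] = 1.0                      # signal with sharp edges
        t = np.arange(n)
        A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
        A /= A.sum(axis=1, keepdims=True)        # row-normalized Gaussian blur
        sigma = 0.01
        y = A @ x_true + sigma * rng.standard_normal(n)

        lam, eps = 5.0, 1e-6                     # prior scale, TV smoothing (assumed)

        def neg_log_posterior(x):
            # Gaussian likelihood + smoothed total-variation (edge-preserving) prior.
            misfit = 0.5 * np.sum((A @ x - y) ** 2) / sigma ** 2
            tv = lam * np.sum(np.sqrt(np.diff(x) ** 2 + eps))
            return misfit + tv

        x_map = minimize(neg_log_posterior, np.zeros(n), method="L-BFGS-B").x

    The total-variation term penalizes the total jump size rather than squared jumps, so the MAP estimate retains sharp discontinuities instead of smoothing them out, which is what "edge-preserving" refers to.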

    Inverse scattering for a random potential

    In this paper we consider an inverse problem for the $n$-dimensional random Schrödinger equation $(\Delta - q + k^2)u = 0$. We study the scattering of plane waves in the presence of a potential $q$ which is assumed to be a Gaussian random function whose covariance is described by a pseudodifferential operator. Our main result is as follows: given the backscattered far field, obtained from a single realization of the random potential $q$, we uniquely determine the principal symbol of the covariance operator of $q$. In particular, for $n = 3$ this result is obtained for the full non-linear inverse backscattering problem. Finally, we present a physical scaling regime where the method is of practical importance.
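    As a hedged sketch of the setting (not of the paper's reconstruction method), the snippet below draws one realization of a stationary Gaussian potential whose covariance operator is a pseudodifferential operator with symbol $c(\xi) = (1 + |\xi|^2)^{-s}$ on a periodic 1D grid; the specific symbol, grid, and exponent $s$ are illustrative assumptions, and the spectral synthesis is correct only up to discretization constants.

        import numpy as np

        rng = np.random.default_rng(1)
        n, s = 256, 1.2
        xi = 2 * np.pi * np.fft.fftfreq(n)             # Fourier frequencies
        symbol = (1.0 + xi ** 2) ** (-s)               # covariance symbol c(xi)
        w_hat = np.fft.fft(rng.standard_normal(n))     # white noise, Fourier side
        q = np.fft.ifft(np.sqrt(symbol) * w_hat).real  # one realization of q

        # The empirical power spectrum of q tracks c(xi) up to constants; the
        # paper's result recovers the principal symbol (here ~ |xi|^(-2s)) from
        # the backscattered far field of a single realization.
        power = np.abs(np.fft.fft(q)) ** 2 / n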

    Hyperparameter Estimation in Bayesian MAP Estimation: Parameterizations and Consistency

    The Bayesian formulation of inverse problems is attractive for three primary reasons: it provides a clear modelling framework, a means for uncertainty quantification, and principled learning of hyperparameters. The posterior distribution may be explored by sampling methods, but for many problems it is computationally infeasible to do so. In this situation maximum a posteriori (MAP) estimators are often sought. Whilst these are relatively cheap to compute and have an attractive variational formulation, a key drawback is their lack of invariance under change of parameterization. This is a particularly significant issue when hierarchical priors are employed to learn hyperparameters. In this paper we study the effect of the choice of parameterization on MAP estimators when a conditionally Gaussian hierarchical prior distribution is employed. Specifically, we consider the centred parameterization, the natural parameterization in which the unknown state is solved for directly, and the noncentred parameterization, which works with a whitened Gaussian as the unknown state variable and arises when considering dimension-robust MCMC algorithms; MAP estimation is well-defined in the nonparametric setting only for the noncentred parameterization. However, we show that MAP estimates based on the noncentred parameterization are not consistent as estimators of hyperparameters; conversely, we show that limits of finite-dimensional centred MAP estimators are consistent as the dimension tends to infinity. We also consider empirical Bayesian hyperparameter estimation, show consistency of these estimates, and demonstrate that they are more robust with respect to noise than centred MAP estimates. An underpinning concept throughout is that hyperparameters may only be recovered up to measure equivalence, a well-known phenomenon in the context of the Ornstein-Uhlenbeck process.
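    To make the parameterization issue concrete, here is a hedged Python sketch of the two MAP objectives for a toy conditionally Gaussian hierarchy u | θ ~ N(0, θ²C) with linear data y = Gu + η; the operator G, covariance C, and hyperprior on θ are toy assumptions, not the paper's setting.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        n, m, sigma = 20, 10, 0.05
        G = rng.standard_normal((m, n)) / np.sqrt(n)       # toy forward operator
        d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        C = np.exp(-d / 5.0)                               # exponential covariance
        L = np.linalg.cholesky(C)
        theta_true = 2.0
        u_true = theta_true * (L @ rng.standard_normal(n))
        y = G @ u_true + sigma * rng.standard_normal(m)

        def centred(z):
            # z = (u, log theta); u | theta ~ N(0, theta^2 C), so the negative
            # log prior carries the theta-dependent normalizer n * log(theta).
            u, log_th = z[:n], z[n]
            th = np.exp(log_th)
            misfit = 0.5 * np.sum((G @ u - y) ** 2) / sigma ** 2
            prior = 0.5 * u @ np.linalg.solve(C, u) / th ** 2 + n * log_th
            return misfit + prior + 0.5 * log_th ** 2      # toy hyperprior

        def noncentred(z):
            # z = (xi, log theta); u = theta * L @ xi with whitened xi ~ N(0, I),
            # so the prior term has no theta-dependent normalizer.
            xi, log_th = z[:n], z[n]
            u = np.exp(log_th) * (L @ xi)
            misfit = 0.5 * np.sum((G @ u - y) ** 2) / sigma ** 2
            return misfit + 0.5 * xi @ xi + 0.5 * log_th ** 2

        z0 = np.zeros(n + 1)
        map_c = minimize(centred, z0, method="L-BFGS-B").x
        map_nc = minimize(noncentred, z0, method="L-BFGS-B").x

    The centred objective carries the θ-dependent normalizing term n·log θ while the noncentred one does not, which is one concrete way the two MAP estimators, and hence the inferred hyperparameter, can disagree.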